
    Anelastic Versus Fully Compressible Turbulent Rayleigh-Bénard Convection

    Numerical simulations of turbulent Rayleigh-Bénard convection in an ideal gas, using either the anelastic approximation or the fully compressible equations, are compared. Theoretically, the anelastic approximation is expected to hold in weakly superadiabatic systems with $\epsilon = \Delta T / T_r \ll 1$, where $\Delta T$ denotes the superadiabatic temperature drop over the convective layer and $T_r$ the bottom temperature. Using direct numerical simulations, a systematic comparison of anelastic and fully compressible convection is carried out. With decreasing superadiabaticity $\epsilon$, the fully compressible results are found to converge linearly to the anelastic solution, with larger density contrasts generally improving the match. We conclude that in many solar and planetary applications, where the superadiabaticity is expected to be vanishingly small, results obtained with the anelastic approximation are in fact more accurate than fully compressible computations, which typically fail to reach small $\epsilon$ for numerical reasons. On the other hand, if the astrophysical system studied contains regions with $\epsilon \sim O(1)$, such as the solar photosphere, fully compressible simulations have the advantage of capturing the full physics. Interestingly, even in weakly superadiabatic regions, like the bulk of the solar convection zone, the errors introduced by using artificially large values of $\epsilon$ for efficiency reasons remain moderate. If quantitative errors of the order of $10\%$ are acceptable in such low-$\epsilon$ regions, our work suggests that fully compressible simulations can indeed be computationally more efficient than their anelastic counterparts. Comment: 24 pages, 9 figures
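    As a compact restatement of the convergence claim above (a sketch in our own notation; the error measure $E(\epsilon)$ and the prefactor $C$ are not defined in the abstract and are assumptions here):

        \epsilon = \frac{\Delta T}{T_r} \ll 1,
        \qquad
        E(\epsilon) \equiv \lVert \text{fully compressible solution} - \text{anelastic solution} \rVert \approx C\,\epsilon \to 0
        \quad \text{as } \epsilon \to 0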

    Reducing BCI calibration time with transfer learning: a shrinkage approach

    Introduction: A brain-computer interface (BCI) system allows subjects to use neural control signals to drive a computer application. A BCI is therefore generally equipped with a decoder to differentiate between types of responses recorded in the brain. For example, an application giving feedback to the user can benefit from recognising the presence or absence of a so-called error potential (ErrP), elicited in the brain of the user when this feedback is perceived as being 'wrong', a mistake of the system. Due to the high inter- and intra-subject variability in these response signals, calibration data needs to be recorded to train the decoder. This calibration session is exhausting and demotivating for the subject. Transfer learning is a general name for techniques in which data from previous subjects is used as additional information to train a decoder for a new subject, thereby reducing the amount of subject-specific data that needs to be recorded during calibration. In this work we apply transfer learning to an ErrP detection task by applying single-target shrinkage to Linear Discriminant Analysis (LDA), a method originally proposed by Höhne et al. to improve accuracy by compensating for inter-stimulus differences in an ERP speller [1].

    Material, Methods and Results: For our study we used the error potential dataset recorded by Perrin et al. [2]. For each of the 26 subjects, 340 ErrP/non-ErrP responses were recorded, with ErrP-to-non-ErrP ratios ranging from 0.41 to 0.94. Of these, 272 responses were available for training the decoder and the remaining 68 responses were held out for testing. For every subject separately we built three decoders. First, a subject-specific LDA decoder was built solely from the subject's own training data. Second, we added the training data of the other 25 subjects to train a global LDA decoder, naively ignoring the differences between subjects. Finally, the single-target shrinkage (STS) method [1] was used to regularise the parameters of the subject-specific decoder towards those of the global decoder. Using cross-validation, this method assigns an optimal weight to the subject-specific data and the data from previous subjects. Figure 1 shows the performance of the three decoders on the test data in terms of AUC, as a function of the amount of subject-specific calibration data used.

    Discussion: The subject-specific decoder in Figure 1 shows how sensitive decoding performance is to the amount of calibration data provided. As the global decoder shows, using data from previously recorded subjects the amount of calibration data, and thus the calibration time, can be reduced, although some decoding quality is sacrificed. By striking an optimal compromise between the subject-specific and global decoders, the single-target shrinkage decoder allows the calibration time to be reduced by 20% without any change in decoder quality (confirmed by a paired-sample t-test, p=0.72).

    Significance: This work serves as a first proof of concept for the use of shrinkage LDA as a transfer learning method. More specifically, the error potential decoder built with reduced calibration time strengthens the case for error-correcting methods in BCI
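    Below is a minimal sketch of the shrinkage idea used here: the subject-specific LDA parameters are pulled towards those of a global decoder trained on other subjects' data, with the shrinkage weight chosen by cross-validated AUC. It illustrates the principle rather than the exact STS formulation of [1]; the feature/label arrays and the lambda grid are assumptions.

    # Sketch of shrinkage-based transfer learning for LDA (not the exact STS
    # formulation of Hoehne et al. [1]): the subject-specific LDA parameters are
    # blended with those of a "global" LDA trained on other subjects' data, and
    # the blending weight is chosen by cross-validated AUC on the subject's data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold
    from sklearn.metrics import roc_auc_score

    def lda_params(X, y):
        """Fit a two-class LDA and return its weight vector and intercept."""
        lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
        lda.fit(X, y)
        return lda.coef_.ravel(), lda.intercept_.ravel()

    def shrinkage_transfer_lda(X_subj, y_subj, X_glob, y_glob,
                               lambdas=np.linspace(0.0, 1.0, 21)):
        """Blend subject-specific and global LDA decoders; pick the blend by CV-AUC."""
        w_glob, b_glob = lda_params(X_glob, y_glob)
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        scores = np.zeros(len(lambdas))
        for train_idx, val_idx in cv.split(X_subj, y_subj):
            w_s, b_s = lda_params(X_subj[train_idx], y_subj[train_idx])
            for i, lam in enumerate(lambdas):
                w = (1 - lam) * w_s + lam * w_glob
                b = (1 - lam) * b_s + lam * b_glob
                decision = X_subj[val_idx] @ w + b
                scores[i] += roc_auc_score(y_subj[val_idx], decision)
        lam_best = lambdas[np.argmax(scores)]
        # Refit on all of the subject's calibration data and blend once more.
        w_s, b_s = lda_params(X_subj, y_subj)
        w = (1 - lam_best) * w_s + lam_best * w_glob
        b = (1 - lam_best) * b_s + lam_best * b_glob
        return w, b, lam_best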

    Switching characters between stimuli improves P300 speller accuracy

    In this paper, an alternative stimulus presentation paradigm for the P300 speller is introduced. Similar to the checkerboard paradigm, it minimises the occurrence of the two most common causes of spelling errors: adjacency distraction and double flashes. Moreover, in contrast to the checkerboard paradigm, this new stimulus sequence does not increase the time required per stimulus iteration. Our new paradigm is compared to the basic row-column paradigm, and the results indicate that, on average, the accuracy is improved
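    The abstract does not spell out the new flash sequence itself, but the two error sources it targets can be made concrete. The hypothetical check below (the 6x6 speller matrix and the list-of-flash-groups representation are assumptions for illustration) verifies that a candidate sequence contains no double flashes and no adjacent characters flashed together:

    # Hypothetical sanity check for a P300 flash sequence: no character appears
    # in two consecutive stimuli ("double flash") and no two characters flashed
    # together are neighbours in the speller matrix ("adjacency distraction").
    # The 6x6 matrix layout and the sequence format are illustrative assumptions.
    from itertools import combinations

    ROWS, COLS = 6, 6

    def position(char_index):
        """Row/column of a character in the assumed 6x6 speller matrix."""
        return divmod(char_index, COLS)

    def adjacent(a, b):
        (ra, ca), (rb, cb) = position(a), position(b)
        return max(abs(ra - rb), abs(ca - cb)) == 1

    def sequence_ok(flash_groups):
        """flash_groups: list of sets of character indices flashed together."""
        for prev, cur in zip(flash_groups, flash_groups[1:]):
            if prev & cur:          # same character in consecutive stimuli
                return False
        for group in flash_groups:
            if any(adjacent(a, b) for a, b in combinations(group, 2)):
                return False        # adjacent characters within one stimulus
        return True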

    The complex relationship between interrogation techniques, suspects changing their statement and legal assistance.

    This study aims to provide more insight into the complex and dynamic relationships between interrogation techniques, changes in suspects' statements and the presence of a lawyer. In doing so, it shows the importance of taking into account the conditions under which interrogation techniques can elicit statements from suspects. Based on a Dutch sample of 168 police interviews of suspects in homicide cases, structural equation modelling is used to analyse (1) the extent to which interrogation techniques mediate suspects changing their statement and (2) the extent to which the presence of a lawyer moderates the relationship between interrogation techniques and suspects changing their statement. The results show that manipulative interrogation techniques mediate the statement changes of suspects who remain silent, compared to suspects who give a statement on personal matters or who deny, and only during interviews without a lawyer. Based on these findings it can be concluded that the presence of a lawyer can change the dynamics of police interviews of suspects. This is an important conclusion given the European developments in strengthening the safeguards of the rights of suspects in police custody. The presence of a lawyer might contribute to reducing false confessions, avoiding tunnel vision and preventing miscarriages of justice
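    To make the moderation question concrete (whether the technique-to-statement-change relationship differs when a lawyer is present), a deliberately simplified illustration with an interaction-term logistic regression is sketched below; the variable names are hypothetical and this is not the structural equation model used in the study:

    # Simplified illustration of the moderation question: does the effect of
    # manipulative interrogation techniques on a suspect changing their statement
    # differ when a lawyer is present? This uses an interaction-term logistic
    # regression, NOT the study's structural equation model; the DataFrame
    # columns are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    def fit_moderation_model(interviews: pd.DataFrame):
        # interviews: one row per interview with columns
        #   changed_statement (0/1), manipulative_score (technique score),
        #   lawyer_present (0/1)
        model = smf.logit(
            "changed_statement ~ manipulative_score * lawyer_present",
            data=interviews,
        ).fit(disp=False)
        # A notable manipulative_score:lawyer_present coefficient would indicate
        # that lawyer presence moderates the technique-statement relationship.
        return model.summary()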

    Te hard van stapel gelopen.

    However well we try to organise society, fraud remains part of it. This is evident from high-profile scandals such as the Enron case, the Dutch Bouwfraude (construction fraud) affair and Nigerian letter scams. But fraud also occurs on a less notorious scale, such as scams on auction sites, in the workplace and the like. Fraud thus takes many different forms and makes victims of many different people. It is therefore of great importance to gain insight into the extent of fraud, who its perpetrators and victims are, why fraud is committed, and to what extent fraud can be combated

    Income Inequality Decomposition, Russia 1992-2002: Method and Application

    Decomposition methods for income inequality measures, such as the Gini index and the members of the Generalised Entropy family, are widely applied. Most methods decompose income inequality into a between (explained) and a within (unexplained) part, according to two or more population subgroups or income sources. In this article, we use a regression analysis for a lognormal distribution of personal income, modelling both the mean and the variance, decompose the variance as a measure of income inequality, and apply the method to survey data from Russia spanning the first decade of market transition (1992-2002). For the first years of the transition, only a small part of the income inequality could be explained. Thereafter, between 1996 and 1999, a larger part (up to 40%) could be explained, and 'winner' and 'loser' categories of the transition could be identified. The self-employed, moving to the upper end of the income distribution, gained from the transition; the unemployed were among the losers
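    The decomposition described above can be written compactly (a sketch in our own notation, assuming the variance of log income is the inequality measure being split; $\mu(x)$ and $\sigma^2(x)$ are the regression-modelled mean and variance of log income given covariates $x$):

        \ln y \mid x \sim N\big(\mu(x), \sigma^2(x)\big),
        \qquad
        \operatorname{Var}(\ln y) = \underbrace{\operatorname{Var}_x\big(\mu(x)\big)}_{\text{between / explained}}
        + \underbrace{\operatorname{E}_x\big[\sigma^2(x)\big]}_{\text{within / unexplained}}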

    Raadsman bij politieverhoor

    Summary. Problem statement: The admission of defence counsel to police interrogations of suspects has been debated for decades. In July 2008 a two-year experiment was launched in which the lawyer is admitted to the (first) police interrogation. This report describes the study of that experiment and our findings. That counsel is now admitted to police interrogations in an experimental setting must be understood against the background of international developments and a number of criminal cases in which the suspect was wrongfully convicted, partly on the basis of a false confession. The immediate cause was the set of errors made during the preliminary investigation of the Schiedammer Parkmoord case by the police, the Openbaar Ministerie and the Nederlands Forensisch Instituut, and the erroneous judicial decisions based on them. These prompted the Programma Versterking Opsporing en Vervolging, which aimed to optimise truth-finding in criminal cases. The programme comprised measures aimed, on the one hand, at improving the quality of police interrogations and, on the other, at increasing their transparency. One of the measures in the programme was the introduction of audio or audiovisual recording of interrogations in serious cases. In addition to the programme, the political wish was expressed to admit the lawyer to the police interrogation. After the Tweede Kamer adopted the motie Dittrich, the Minister of Justice agreed to introduce a temporary change in the procedure for the first police interrogations of suspects: the 'experiment raadsman bij politieverhoor' (counsel at police interrogation experiment). The aim of the experiment is to examine the added value of counsel's presence for the transparency and verifiability of the interrogation and for the prevention of improper pressure. In practice this objective entails a twofold change to the interrogation situation: the lawyer is admitted to the interrogation, and lawyer and suspect are given the opportunity to consult in private before the interrogation. This temporary (experimental) measure applies to all (completed) offences against life listed in Title XIX of the Wetboek van Strafrecht in the Amsterdam-Amstelland and Rotterdam-Rijnmond regions. For the purposes of the experiment, the 'protocol pilot raadsman bij politieverhoor van verdachten' was drawn up, prescribing how all participants in the interrogations should conduct themselves. It is important to note that, under the protocol, counsel and suspect may not have contact with each other during the interrogation. Moreover, counsel may not disturb the interrogation in any way and may intervene only when, in their view, the prohibition on undue pressure is violated. The lawyer is thus assigned a passive role during the interrogation. The objective of the present study is to map as carefully as possible the actual course of events surrounding police interrogations with prior consultation and the admission of counsel. The description of the actual course of events and the experiences of those involved in the experiment therefore form the core of the study. In addition, an attempt is made to determine whether and to what extent the interrogation situation changes as a result of the adjustments discussed above.
The central research question of the study is: How do the first police interrogations with prior consultation and the presence of the lawyer proceed, and what are the factually observable effects of the consultation and the presence on the course of the interrogation? [.....] Finally, it should be noted that a new situation has since arisen as a result of case law of the EHRM and the HR. It will be interesting to see to what extent the conclusions of this study hold up in the context of those developments. In other words: to what extent are we dealing with lasting effects? This will have to become clear in a few years' time, and the last word on extending the right to legal assistance during suspect interrogations has certainly not yet been spoken

    Numerical modelling of the filling of formworks with self-compacting concrete

    This paper describes the numerical modelling of the flow of self-compacting concrete (SCC) in column and wall formworks during the filling process. It is subdivided into four main parts. In the first part, the rheological properties of SCC and the theory regarding the pressure exerted by the SCC on the formwork are briefly described. In the second part, the formwork filling tests that were carried out at the Magnel Laboratory for Concrete Research of Ghent University are presented, and the general layout of the tests and the measurement set-up are described. In the third part, the numerical modelling of the flow of SCC using a commercially available solver is explained, together with the results obtained from the CFD simulations. Finally, in the last part, a comparison is made between the measurements and the simulation results. The formwork pressures are found to be hydrostatic for SCC pumped from the base of the formwork
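    The hydrostatic result reported above has a simple quantitative form; below is a minimal sketch (the fresh-SCC density of about 2300 kg/m^3 is an assumed typical value, not a figure from the paper):

    # Lateral formwork pressure under the hydrostatic assumption reported above:
    # p = rho * g * h over the filling height. The density of fresh SCC is an
    # assumed, typical value, not a value taken from the paper.
    RHO_SCC = 2300.0   # kg/m^3, assumed fresh-SCC density
    G = 9.81           # m/s^2

    def hydrostatic_pressure(depth_m: float) -> float:
        """Lateral pressure (Pa) at a given depth below the free concrete surface."""
        return RHO_SCC * G * depth_m

    # Example: pressure at the base of a 3 m high column formwork (~67.7 kPa)
    print(f"{hydrostatic_pressure(3.0) / 1000:.1f} kPa")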

    Fragile Spectral and Temporal Auditory Processing in Adolescents with Autism Spectrum Disorder and Early Language Delay

    We investigated low-level auditory spectral and temporal processing in adolescents with autism spectrum disorder (ASD) and early language delay, compared to matched typically developing controls. Auditory measures were designed to target right versus left auditory cortex processing (i.e. frequency discrimination and slow amplitude modulation (AM) detection versus gap-in-noise detection and faster AM detection), and to pinpoint the task and stimulus characteristics underlying putative superior spectral processing in ASD. We observed impaired frequency discrimination in the ASD group and suggestive evidence of poorer temporal resolution as indexed by gap-in-noise detection thresholds. These findings question the evidence of enhanced spectral sensitivity in ASD and do not support the hypothesis of superior right and inferior left hemispheric auditory processing in ASD. University of Leuven Research Council (Grant IDO/08/013)
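    For readers unfamiliar with these psychoacoustic measures, the two stimulus families named above can be sketched in a few lines; the carrier frequency, modulation rate, gap length and sample rate below are illustrative choices, not the study's parameters:

    # Minimal sketches of the stimulus types named above: a sinusoidally
    # amplitude-modulated (AM) tone and a noise burst containing a silent gap.
    # All parameter values are illustrative only, not those used in the study.
    import numpy as np

    FS = 44100  # sample rate (Hz)

    def am_tone(duration=1.0, carrier_hz=1000.0, mod_hz=4.0, depth=1.0):
        """Sinusoidally amplitude-modulated tone: (1 + m*sin(2*pi*fm*t)) * carrier."""
        t = np.arange(int(duration * FS)) / FS
        envelope = 1.0 + depth * np.sin(2 * np.pi * mod_hz * t)
        return envelope * np.sin(2 * np.pi * carrier_hz * t)

    def gap_in_noise(duration=0.5, gap_ms=5.0, rng=np.random.default_rng(0)):
        """White-noise burst with a silent gap inserted at its midpoint."""
        noise = rng.standard_normal(int(duration * FS))
        gap_samples = int(gap_ms / 1000 * FS)
        mid = len(noise) // 2
        noise[mid:mid + gap_samples] = 0.0
        return noise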